Ch.3 Linear Geometry


Linear Geometry

Draw two linear equations in two unknowns as lines; three possibilities arise:

Unique Solution

No Solutions

Infinitely Many Solutions


Vectors

object with magnitude and direction; models displacement
vectors are equal iff same magnitude and same direction

Vector going from $(a_1,a_2,\dots,a_n)$ to $(b_1,b_2,\dots,b_n)$ is
$$\begin{pmatrix}b_1-a_1\\b_2-a_2\\\vdots\\b_n-a_n\end{pmatrix}=\begin{pmatrix}b_1\\b_2\\\vdots\\b_n\end{pmatrix}-\begin{pmatrix}a_1\\a_2\\\vdots\\a_n\end{pmatrix}$$

a vector starting at the origin and ending at a point is denoted by the vector of that point's coordinates
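The displacement formula above can be sketched in Python; the `displacement` helper below is illustrative, not from the text:

```python
# Sketch: the vector from point A to point B is computed
# componentwise as B - A.

def displacement(a, b):
    """Vector from point a to point b: (b1-a1, ..., bn-an)."""
    return tuple(bi - ai for ai, bi in zip(a, b))

# Vector from (1, 2, 3) to (4, 6, 8):
print(displacement((1, 2, 3), (4, 6, 8)))  # (3, 4, 5)
```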

Vector Operations
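A minimal Python sketch of the two standard vector operations, componentwise addition and scalar multiplication; the `add` and `scale` helpers are hypothetical names:

```python
# Sketch of the two basic vector operations on tuple-based vectors:
# componentwise addition and scalar multiplication.

def add(u, v):
    return tuple(ui + vi for ui, vi in zip(u, v))

def scale(c, v):
    return tuple(c * vi for vi in v)

print(add((1, 2), (3, 4)))   # (4, 6)
print(scale(2, (1, -3)))     # (2, -6)
```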


Linear Surfaces

Lines

Line in $\mathbb{R}^2$ through $(1,2)$ and $(3,1)$ is comprised of the vectors
$$\left\{\begin{pmatrix}1\\2\end{pmatrix}+t\begin{pmatrix}2\\-1\end{pmatrix}\middle|\, t\in\mathbb{R}\right\}$$
The vector associated with the parameter $t$, $\begin{pmatrix}2\\-1\end{pmatrix}$, is a direction vector

Intuition

line through $A=(a_1,\dots,a_n)$ and $B=(b_1,\dots,b_n)$ is one point on the line (eg $A$) plus some multiple of the displacement vector (eg $B-A$)
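The parametrization above can be sampled in Python; the `point_on_line` helper is an assumed name:

```python
# Sketch: points on the line through A=(1,2) and B=(3,1) via the
# parametrization A + t*(B - A); t=0 gives A, t=1 gives B.

def point_on_line(a, b, t):
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

print(point_on_line((1, 2), (3, 1), 0.0))  # (1.0, 2.0) -> A
print(point_on_line((1, 2), (3, 1), 1.0))  # (3.0, 1.0) -> B
print(point_on_line((1, 2), (3, 1), 0.5))  # (2.0, 1.5) -> midpoint
```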

Planes

Plane in $\mathbb{R}^3$ going through $(a_1,a_2,a_3)$, $(b_1,b_2,b_3)$, $(c_1,c_2,c_3)$ consists of
$$\left\{\begin{pmatrix}a_1\\a_2\\a_3\end{pmatrix}+t\begin{pmatrix}b_1-a_1\\b_2-a_2\\b_3-a_3\end{pmatrix}+s\begin{pmatrix}c_1-a_1\\c_2-a_2\\c_3-a_3\end{pmatrix}\middle|\,t,s\in\mathbb{R}\right\}$$

A set of the form $\left\{\vec{p}+t_1\vec{v}_1+t_2\vec{v}_2+\cdots+t_k\vec{v}_k\middle|\, t_1,t_2,\dots,t_k\in\mathbb{R}\right\}$ where $\vec{v}_1,\vec{v}_2,\dots,\vec{v}_k\in\mathbb{R}^n$ and $k\le n$ is a $k$-dimensional linear surface or $k$-flat $\rightarrow$ the multi-dimensional extension of lines and planes
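A point of a $k$-flat can be evaluated directly from that definition; the `flat_point` helper below is an illustrative sketch:

```python
# Sketch: evaluate a point p + t1*v1 + ... + tk*vk of a k-flat
# for given parameters t1, ..., tk.

def flat_point(p, directions, params):
    """Evaluate p + sum(t_i * v_i) componentwise."""
    point = list(p)
    for t, v in zip(params, directions):
        for i, vi in enumerate(v):
            point[i] += t * vi
    return tuple(point)

# Plane in R^3 through (1,0,0) with directions (0,1,0) and (0,0,1):
print(flat_point((1, 0, 0), [(0, 1, 0), (0, 0, 1)], (2, 3)))  # (1, 2, 3)
```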


Metrics

Length

length of $\vec{v}\in\mathbb{R}^n$ is
$$|\vec{v}|=\sqrt{v_1^2+\cdots+v_n^2}$$
Clearly, $|a\vec{v}|=|a||\vec{v}|$ for all $a\in\mathbb{R}$ and all $\vec{v}\in\mathbb{R}^n$ (note: $|-\vec{v}|=|\vec{v}|$)

normalize a vector $\vec{v}$ to unit length by taking $\vec{v}/|\vec{v}|$
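Length and normalization can be sketched as follows (the `length` and `normalize` helpers are assumed names):

```python
import math

# Sketch: length of a vector in R^n and normalization to unit length.

def length(v):
    return math.sqrt(sum(vi * vi for vi in v))

def normalize(v):
    mag = length(v)
    return tuple(vi / mag for vi in v)

print(length((3, 4)))             # 5.0
print(normalize((3, 4)))          # (0.6, 0.8)
print(length(normalize((3, 4))))  # 1.0
```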

Dot Product

the dot product (or inner product or scalar product) of two $n$-component vectors is the sum of the products of corresponding components:
$$\vec{u}\cdot\vec{v}=u_1v_1+u_2v_2+\cdots+u_nv_n$$
Properties:

  1. $\vec{v}\cdot\vec{v}=|\vec{v}|^2$
  2. $\vec{u}\cdot\vec{v}=\vec{v}\cdot\vec{u}$ for all $\vec{u},\vec{v}\in\mathbb{R}^n$ (symmetry)
  3. $(a\vec{u})\cdot(b\vec{v})=ab(\vec{u}\cdot\vec{v})$ for all $a,b\in\mathbb{R}$ and all $\vec{u},\vec{v}\in\mathbb{R}^n$
  4. $(\vec{u}_1+\vec{u}_2)\cdot(\vec{v}_1+\vec{v}_2)=\vec{u}_1\cdot\vec{v}_1+\vec{u}_1\cdot\vec{v}_2+\vec{u}_2\cdot\vec{v}_1+\vec{u}_2\cdot\vec{v}_2$ for all $\vec{u}_1,\vec{u}_2,\vec{v}_1,\vec{v}_2\in\mathbb{R}^n$
    3 and 4 together express bilinearity
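The properties above can be spot-checked numerically; a sketch with an illustrative `dot` helper:

```python
# Numeric spot-check of the dot-product properties on sample vectors.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

u, v = (1, 2, 3), (4, -1, 2)
a, b = 3, -2
# Property 1: v.v = |v|^2
assert dot(v, v) == 4**2 + (-1)**2 + 2**2
# Property 2: symmetry
assert dot(u, v) == dot(v, u)
# Property 3: scalars factor out
assert dot(tuple(a * x for x in u), tuple(b * x for x in v)) == a * b * dot(u, v)
```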
Proofs
Property 1

$$\vec{v}=\langle v_1,v_2,\dots,v_n\rangle\\ \vec{v}\cdot\vec{v}=\langle v_1,v_2,\dots,v_n\rangle\cdot\langle v_1,v_2,\dots,v_n\rangle\\ =v_1^2+v_2^2+\cdots+v_n^2=|\vec{v}|^2$$
Property 2

$$\vec{u}=\langle u_1,u_2,\dots,u_n\rangle,\ \vec{v}=\langle v_1,v_2,\dots,v_n\rangle\\ \vec{u}\cdot\vec{v}=u_1v_1+u_2v_2+\cdots+u_nv_n\\ =v_1u_1+v_2u_2+\cdots+v_nu_n=\vec{v}\cdot\vec{u}$$

Property 3

$$\vec{u}=\langle u_1,u_2,\dots,u_n\rangle,\ \vec{v}=\langle v_1,v_2,\dots,v_n\rangle\\ (a\vec{u})\cdot(b\vec{v})=\langle au_1,au_2,\dots,au_n\rangle\cdot\langle bv_1,bv_2,\dots,bv_n\rangle\\ =abu_1v_1+abu_2v_2+\cdots+abu_nv_n\\ =ab(u_1v_1+u_2v_2+\cdots+u_nv_n)=ab(\vec{u}\cdot\vec{v})$$

Property 4

$$\vec{u}_1=\langle u_{1_1},\dots,u_{1_n}\rangle,\ \vec{u}_2=\langle u_{2_1},\dots,u_{2_n}\rangle,\\ \vec{v}_1=\langle v_{1_1},\dots,v_{1_n}\rangle,\ \vec{v}_2=\langle v_{2_1},\dots,v_{2_n}\rangle\\ (\vec{u}_1+\vec{u}_2)\cdot(\vec{v}_1+\vec{v}_2)=\langle u_{1_1}+u_{2_1},\dots,u_{1_n}+u_{2_n}\rangle\cdot\langle v_{1_1}+v_{2_1},\dots,v_{1_n}+v_{2_n}\rangle\\ =(u_{1_1}+u_{2_1})(v_{1_1}+v_{2_1})+\cdots+(u_{1_n}+u_{2_n})(v_{1_n}+v_{2_n})\\ =u_{1_1}v_{1_1}+u_{1_1}v_{2_1}+u_{2_1}v_{1_1}+u_{2_1}v_{2_1}+\cdots+u_{1_n}v_{1_n}+u_{1_n}v_{2_n}+u_{2_n}v_{1_n}+u_{2_n}v_{2_n}\\ =\vec{u}_1\cdot\vec{v}_1+\vec{u}_1\cdot\vec{v}_2+\vec{u}_2\cdot\vec{v}_1+\vec{u}_2\cdot\vec{v}_2$$


Triangle Inequality

For any $\vec{u},\vec{v}\in\mathbb{R}^n$,
$$|\vec{u}+\vec{v}|\le|\vec{u}|+|\vec{v}|$$
with equality iff one vector is a nonnegative scalar multiple of the other
Proof later

Cauchy-Schwarz Inequality

For any $\vec{u},\vec{v}\in\mathbb{R}^n$,
$$\vec{u}\cdot\vec{v}\le|\vec{u}\cdot\vec{v}|\le|\vec{u}||\vec{v}|$$
Equality $|\vec{u}\cdot\vec{v}|=|\vec{u}||\vec{v}|$ iff one vector is a scalar multiple of the other
Equality $\vec{u}\cdot\vec{v}=|\vec{u}||\vec{v}|$ iff one vector is a nonnegative scalar multiple of the other
$\vec{u}\cdot\vec{v}\le|\vec{u}\cdot\vec{v}|$ should be obvious, since $x\le|x|$ for every real $x$

Proof of Cauchy-Schwarz

Let $\vec{u},\vec{v}\in\mathbb{R}^n$ and $t\in\mathbb{R}$.
Since a squared length is nonnegative, $|\vec{u}+t\vec{v}|^2\ge0$
Using the properties of dot products and expanding,
$$|\vec{u}+t\vec{v}|^2=(\vec{u}+t\vec{v})\cdot(\vec{u}+t\vec{v})=|\vec{u}|^2+2t(\vec{u}\cdot\vec{v})+t^2|\vec{v}|^2\ge0$$
Now view this as a quadratic in $t$. Since it is nonnegative for every $t$, it has $0$ or $1$ real roots, so the discriminant is nonpositive:
$$(2\vec{u}\cdot\vec{v})^2-4|\vec{u}|^2|\vec{v}|^2\le0$$
Rearranging and simplifying gives
$$|\vec{u}\cdot\vec{v}|\le|\vec{u}||\vec{v}|$$
The case when one vector is a scalar multiple of the other, or either is $\vec{0}$, is trivial and gives the equality condition.

Proof of Triangle Inequality

Since both sides are nonnegative, the inequality holds iff its square holds:
$$|\vec{u}+\vec{v}|^2\le(|\vec{u}|+|\vec{v}|)^2$$
The left-hand side expands to
$$|\vec{u}|^2+2\vec{u}\cdot\vec{v}+|\vec{v}|^2$$
The right-hand side expands to
$$|\vec{u}|^2+2|\vec{u}||\vec{v}|+|\vec{v}|^2$$
The inequality simplifies to
$$\vec{u}\cdot\vec{v}\le|\vec{u}||\vec{v}|$$
This is true by Cauchy-Schwarz, so the Triangle Inequality must be true. By the equality condition of Cauchy-Schwarz, equality holds iff one vector is a nonnegative scalar multiple of the other.
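Both inequalities can be spot-checked numerically; a sketch with illustrative `dot` and `length` helpers:

```python
import math

# Numeric spot-check of Cauchy-Schwarz and the Triangle Inequality.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def length(v):
    return math.sqrt(dot(v, v))

u, v = (1, 2, 2), (3, 0, 4)
w = tuple(ui + vi for ui, vi in zip(u, v))
assert abs(dot(u, v)) <= length(u) * length(v)   # Cauchy-Schwarz
assert length(w) <= length(u) + length(v)        # Triangle Inequality
# Equality when one vector is a nonnegative multiple of the other:
v2 = tuple(3 * ui for ui in u)
assert math.isclose(dot(u, v2), length(u) * length(v2))
```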


Angles

Angle between $\vec{u},\vec{v}\in\mathbb{R}^n$ is:
$$\theta=\arccos\left(\frac{\vec{u}\cdot\vec{v}}{|\vec{u}||\vec{v}|}\right)$$
Note: by Cauchy-Schwarz, the argument of $\arccos$ is between $-1$ and $1$

Consider the triangle formed by $\vec{v}$, $\vec{u}$, and $\vec{u}-\vec{v}$
Use law of cosines and simplify

If $\theta=0$, $\vec{u}\parallel\vec{v}$ (parallel)
If $\theta=\pi/2$, $\vec{u}\perp\vec{v}$ (orthogonal)
If $\theta=\pi$, $\vec{u}\parallel\vec{v}$ in opposite directions
$\rightarrow$ Vectors in $\mathbb{R}^n$ are orthogonal iff their dot product is $0$
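The angle formula can be sketched in Python; the `angle` helper is an assumed name, and the clamp guards against floating-point drift just outside $[-1,1]$:

```python
import math

# Sketch: angle between two vectors via theta = arccos(u.v / (|u||v|)).

def angle(u, v):
    d = sum(ui * vi for ui, vi in zip(u, v))
    lu = math.sqrt(sum(x * x for x in u))
    lv = math.sqrt(sum(x * x for x in v))
    # clamp against floating-point values slightly outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, d / (lu * lv))))

print(angle((1, 0), (0, 1)))   # pi/2: orthogonal
print(angle((1, 1), (2, 2)))   # ~0: parallel
print(angle((1, 0), (-1, 0)))  # pi: opposite directions
```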


Relations and Partitions

return to Set Theory
a relation compares things (eg '$<$' or '$=$')
binary relation on set AA is a set of ordered pairs of elements of AA

Examples

Equivalence Relation:
a relation showing two objects are alike in some way
must satisfy:

  1. reflexivity: $a\sim a$ for every $a$
  2. symmetry: if $a\sim b$ then $b\sim a$
  3. transitivity: if $a\sim b$ and $b\sim c$ then $a\sim c$

on the integers, '$=$' is an equivalence relation, '$<$' does not satisfy reflexivity or symmetry, 'nearer than 10' fails transitivity

Partitions

in the 'same sign' relation on the integers, three kinds of pairs appear (both positive, both negative, both zero), so the relation splits the integers into three classes:

Example

$\Omega=\left\{n/d\,\middle|\,n,d\in\mathbb{Z},\,d\ne0\right\}$
Define $S_{n,d}$: $\hat{n}/\hat{d}\in S_{n,d}$ if $\hat{n}d=n\hat{d}$
This partitions the set into equivalent fractions; i.e. "1/2" = "2/4"

Each part of a partition is an equivalence class. Sometimes one element is picked from each class as the class representative; it is called the canonical representative when there is some natural scheme for choosing it (eg the simplest/reduced form of a fraction).
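The fraction relation above can be sketched in Python; the `equivalent` helper is illustrative, and `fractions.Fraction` gives the reduced-form canonical representative:

```python
from fractions import Fraction

# Sketch: n1/d1 ~ n2/d2 iff n1*d2 == n2*d1; this groups fractions into
# equivalence classes, with the reduced form as canonical representative.

def equivalent(n1, d1, n2, d2):
    return n1 * d2 == n2 * d1

assert equivalent(1, 2, 2, 4)        # "1/2" and "2/4" are in the same class
assert not equivalent(1, 2, 2, 3)
# Canonical (reduced-form) representative of the class of 2/4:
print(Fraction(2, 4))  # 1/2
```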

Summary Section:


Gauss-Jordan Reduction

extension of Gauss's Method

Instead of using back-substitution from echelon form, make the leading coefficients $1$ and continue to use row operations to eliminate upwards $\rightarrow$ reduced row echelon form

(every column with a pivot is zero in all other entries)

Example

Solve the following system of equations
$$3x-2y+z=7\\x+y-3z=-6\\2x-2y-3z=-7$$


The resulting augmented matrix is
$$\left(\begin{array}{ccc|c}3&-2&1&7\\1&1&-3&-6\\2&-2&-3&-7\end{array}\right)$$
First get it to echelon form
$$\xrightarrow{\rho_1\leftrightarrow\rho_2}\left(\begin{array}{ccc|c}1&1&-3&-6\\3&-2&1&7\\2&-2&-3&-7\end{array}\right)\xrightarrow[-2\rho_1+\rho_3]{-3\rho_1+\rho_2}\left(\begin{array}{ccc|c}1&1&-3&-6\\0&-5&10&25\\0&-4&3&5\end{array}\right)\xrightarrow{-4/5\rho_2+\rho_3}\left(\begin{array}{ccc|c}1&1&-3&-6\\0&-5&10&25\\0&0&-5&-15\end{array}\right)$$
Next, make the leading coefficients $1$
$$\xrightarrow[-1/5\rho_3]{-1/5\rho_2}\left(\begin{array}{ccc|c}1&1&-3&-6\\0&1&-2&-5\\0&0&1&3\end{array}\right)$$
Finally, use the pivots to clear out each row
$$\xrightarrow[2\rho_3+\rho_2]{3\rho_3+\rho_1}\left(\begin{array}{ccc|c}1&1&0&3\\0&1&0&1\\0&0&1&3\end{array}\right)\xrightarrow{-\rho_2+\rho_1}\left(\begin{array}{ccc|c}1&0&0&2\\0&1&0&1\\0&0&1&3\end{array}\right)$$
This shows the solution is
$$\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}2\\1\\3\end{pmatrix}$$
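The reduction above can be sketched as code; the `gauss_jordan` helper is an illustrative implementation using the three elementary row operations, with exact `Fraction` arithmetic to avoid floating-point noise:

```python
from fractions import Fraction

# Sketch: Gauss-Jordan reduction of an augmented matrix using the three
# elementary row operations (swap, rescale, row combination).

def gauss_jordan(m):
    m = [[Fraction(x) for x in row] for row in m]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols - 1):
        # find a row at or below pivot_row with a nonzero entry here
        pivot = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]   # swap
        lead = m[pivot_row][col]
        m[pivot_row] = [x / lead for x in m[pivot_row]]   # make leading entry 1
        for r in range(rows):                             # eliminate up and down
            if r != pivot_row:
                factor = m[r][col]
                m[r] = [x - factor * y for x, y in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

# The system from the example above:
m = [[3, -2, 1, 7], [1, 1, -3, -6], [2, -2, -3, -7]]
for row in gauss_jordan(m):
    print(row)  # rightmost column gives x=2, y=1, z=3
```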

pivoting on an entry: using the entry to clear out the rest of its column (eg using the bottom row's $z$ to clear out all other $z$ entries)

"Reduces to" is an equivalence
matrix $A$ reduces to matrix $B$ if $B$ is obtained from $A$ using successive elementary row operations given by Gauss's Method

Proof of Equivalence

Reflexivity: a matrix reduces to itself by applying no operations, so $A\sim A$
Symmetry: if $A$ reduces to $B$, then $B$ can be reduced to $A$ by applying the opposite of each row operation in reverse order: a swap is reversed by swapping again, rescaling by a factor of $k$ is reversed by rescaling by a factor of $1/k$, and a row combination is reversed by subtracting if added or adding if subtracted
Transitivity: if $A$ reduces to $B$ after one sequence of row operations and $B$ reduces to $C$ after another, then $A$ reduces to $C$ after the combined sequence
Thus, all three conditions are satisfied, and "reduces to" is an equivalence
$\rightarrow$ matrices that reduce to each other are row equivalent

Linear Combination Lemma

Reduction steps (Gauss's method) take linear combinations of the rows
A linear combination of linear combinations is a linear combination

Proof

Given the combinations $c_{1,1}x_1+\cdots+c_{1,n}x_n$ through $c_{m,1}x_1+\cdots+c_{m,n}x_n$, consider the linear combination
$$d_1(c_{1,1}x_1+\cdots+c_{1,n}x_n)+\cdots+d_m(c_{m,1}x_1+\cdots+c_{m,n}x_n)$$
this can be rearranged to
$$(d_1c_{1,1}+\cdots+d_mc_{m,1})x_1+\cdots+(d_1c_{1,n}+\cdots+d_mc_{m,n})x_n$$
which is again a linear combination

This implies each row of a reduced matrix is a linear combination of the rows of the original matrix

Proof Outline

Use induction. The base step is $0$ steps.
Assume each row is a linear combination of the original rows after $k$ steps, then show it must still be a linear combination after step $k+1$ for any of the three moves.
Essentially, show linear combination $\rightarrow$ linear combination after any of the steps.

In short: Gauss's method takes linear combinations of the rows to eliminate any linear relationship among them

Reduced echelon form is unique
Two row equivalent matrices will yield the same reduced echelon form matrix
$\rightarrow$ it is the canonical form